16 research outputs found

    Three dimensional information estimation and tracking for moving objects detection using two cameras framework

    Calibration, matching and tracking are major concerns in obtaining 3D information consisting of depth, direction and velocity. In estimating depth, camera parameters and matched points are the two necessary inputs. Depth, direction and matched points can be obtained accurately if the cameras are well calibrated using traditional manual calibration. However, most traditional manual calibration methods are inconvenient to use because markers, or the real-world size of an object, must be provided or known. Self-calibration overcomes this limitation of traditional calibration, but does not by itself provide depth or matched points. Other approaches attempt to match corresponding objects using 2D visual information without calibration, but they suffer from low matching accuracy under large perspective distortion. This research focuses on obtaining 3D information using a self-calibrated tracking system, in which matching and tracking are performed under self-calibrated conditions. Three contributions are introduced to achieve this objective. Firstly, orientation correction is introduced to obtain better relationship matrices for matching during tracking. Secondly, given these relationship matrices, a post-processing method, status-based matching, is introduced to improve the object matching result; the proposed matching algorithm achieves a matching rate of almost 90%. Depth is estimated after the status-based matching. Thirdly, tracking is performed based on the x-y coordinates and the estimated depth under self-calibrated conditions. Results show that the proposed self-calibrated tracking system successfully differentiates the locations of objects even under occlusion in the field of view, and is able to determine the direction and velocity of multiple moving objects.
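    As a rough illustration of the depth-estimation and velocity steps described above, the sketch below triangulates matched points from the two views and derives direction and speed from consecutive 3D positions. It assumes the 3x4 projection matrices come out of the (self-)calibration stage and uses OpenCV's cv2.triangulatePoints; the function and variable names are illustrative only, not taken from the thesis.

```python
import numpy as np
import cv2

def estimate_depth(P1, P2, pts1, pts2):
    """Triangulate matched points from two views and return per-point depth.

    P1, P2 : 3x4 projection matrices obtained from calibration/self-calibration.
    pts1, pts2 : Nx2 arrays of matched pixel coordinates in each view.
    """
    # cv2.triangulatePoints expects 2xN arrays and returns 4xN homogeneous points.
    X_h = cv2.triangulatePoints(P1, P2, pts1.T.astype(float), pts2.T.astype(float))
    X = (X_h[:3] / X_h[3]).T                        # Nx3 Euclidean 3D points
    # Projective depth in camera-1 coordinates (z for a standard K[R|t]).
    depth = (P1 @ np.c_[X, np.ones(len(X))].T)[2]
    return X, depth

def direction_and_speed(X_prev, X_curr, dt):
    """Direction (unit vector) and speed of a tracked object between two frames."""
    d = X_curr - X_prev
    speed = np.linalg.norm(d) / dt
    direction = d / (np.linalg.norm(d) + 1e-12)
    return direction, speed
```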

    Enhanced rotational feature points matching using orientation correction

    In matching between images, several techniques have been developed for estimating an orientation assignment that makes feature points invariant to rotation. However, imperfect estimation of the orientation assignment may lead to feature mismatching and a low number of correctly matched points. Additionally, several candidates with high correlation values for a single feature in the reference image may lead to matching confusion. In this paper, we propose a post-processing matching technique that not only increases the number of correctly matched points but also addresses the two issues above. The key idea is to modify the feature orientation based on the relative rotation between the two images, obtained by taking the difference between the major correctly matched points in the first matching cycle. Our analysis shows that the number of detected points correctly matched with the reference image can be increased by up to 50%. In addition, some points mismatched due to similar correlation values in the first matching round can be corrected. Another advantage of the proposed algorithm is that it can be applied to other state-of-the-art orientation assignment techniques.
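    The following sketch illustrates the orientation-correction idea on top of OpenCV SIFT: a first matching cycle estimates the dominant relative rotation from the angle differences of the initial matches, the target keypoints' orientations are shifted by that amount, descriptors are recomputed, and matching is repeated. The SIFT and brute-force matcher choices, and the median rotation estimate, are assumptions made for illustration, not the exact pipeline of the paper.

```python
import cv2
import numpy as np

def match_with_orientation_correction(img_ref, img_tgt):
    sift = cv2.SIFT_create()
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

    kp_r, des_r = sift.detectAndCompute(img_ref, None)
    kp_t, des_t = sift.detectAndCompute(img_tgt, None)

    # First matching cycle: estimate the dominant relative rotation from the
    # orientation differences of the initial (mostly correct) matches.
    first = bf.match(des_r, des_t)
    diffs = [(kp_t[m.trainIdx].angle - kp_r[m.queryIdx].angle) % 360 for m in first]
    rel_rot = float(np.median(diffs))   # simple robust estimate; ignores wrap-around

    # Correct the target keypoints' orientations, recompute descriptors,
    # then run a second matching cycle on the corrected descriptors.
    for kp in kp_t:
        kp.angle = (kp.angle - rel_rot) % 360
    kp_t, des_t = sift.compute(img_tgt, kp_t)
    second = bf.match(des_r, des_t)
    return sorted(second, key=lambda m: m.distance)
```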

    HEVC 2D-DCT architectures comparison for FPGA and ASIC implementations

    This paper compares ASIC and FPGA implementations of two commonly used architectures for the 2-dimensional discrete cosine transform (DCT): the parallel and folded architectures. The DCT has been designed for sizes 4x4, 8x8 and 16x16, and implemented on a Silterra 180nm ASIC and a Xilinx Kintex UltraScale FPGA. The objective is to determine suitable low-energy architectures, since their characteristics differ greatly in terms of cell usage, placement and routing on these platforms. The parallel and folded DCT architectures for all three sizes have been designed in Verilog HDL, including basic serializer-deserializer input and output. Results show that for the large 16x16 transform, the ASIC parallel architecture consumes roughly 30% less energy than the folded architecture, whereas on the FPGA the folded architecture consumes roughly 34% less energy than the parallel architecture. In terms of overall energy consumption, the 180nm ASIC implementation uses about 58% less energy than the Xilinx UltraScale FPGA.
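    Both architectures compute the same separable row-column 2D DCT and differ only in whether a single 1D-DCT core is time-shared (folded) or two cores are pipelined (parallel). The short functional model below, written in floating point rather than the integer transform actually specified by HEVC, shows that row-column decomposition for the 4x4, 8x8 and 16x16 sizes; it is a behavioural reference only, not the Verilog design of the paper.

```python
import numpy as np

def dct_1d_matrix(n):
    """Orthonormal type-II DCT matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct_2d_row_column(block):
    """Separable 2D DCT as two 1D passes with a transpose in between.

    A folded architecture time-shares one 1D-DCT core for both passes
    (with a transpose buffer), while a parallel architecture pipelines
    two cores so both passes run concurrently on streaming blocks.
    """
    C = dct_1d_matrix(block.shape[0])
    pass1 = C @ block        # first 1D pass along one dimension
    return pass1 @ C.T       # second 1D pass along the other dimension

# Functional check for the three transform sizes used in the paper.
for n in (4, 8, 16):
    blk = np.random.randint(-128, 128, (n, n)).astype(float)
    coeffs = dct_2d_row_column(blk)
```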

    Recognizing hidden emotions from difference image using mean local mapped pattern

    Recent progress in computer vision has pushed the limits of facial recognition from human identification to micro-expressions (MEs). However, the visual analysis of MEs remains a very challenging task because of the short duration and subtle intensity of the underlying signals. To date, the accuracy of recognizing hidden emotions from frames using conventional methods is still far from saturation. To address this, we propose a new ME recognition approach based on the Mean Local Mapped Pattern (M-LMP) as a texture feature, which outperforms other state-of-the-art features in terms of accuracy due to its capability of capturing small pixel transitions. Inspired by previous work, we apply M-LMP to the difference image computed from an onset frame and an apex frame, where the former represents the frame with neutral emotion and the latter is the frame with the largest ME intensity. The extracted local features are classified using support vector machine (SVM) and K-nearest neighbour (KNN) classifiers. The proposed approach is validated on the CASME II and CAS(ME)2 datasets, and the results are compared with other similar state-of-the-art approaches. Comprehensive experiments with various parameter settings show the robustness of our approach on imbalanced and small datasets.
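    A minimal sketch of the overall pipeline is given below: the onset-apex difference image is computed, a local texture histogram is extracted from it, and SVM and KNN classifiers are trained on the resulting features. Scikit-image's uniform LBP stands in for the M-LMP descriptor purely for illustration, scikit-learn supplies the classifiers, and dataset loading for CASME II / CAS(ME)2 is omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def difference_image(onset, apex):
    """Difference of the apex (peak-intensity) and onset (neutral) grayscale frames."""
    return apex.astype(np.float32) - onset.astype(np.float32)

def texture_histogram(diff, P=8, R=1.0, bins=59):
    """Histogram of a local texture pattern over the difference image.

    Uniform LBP is used here only as a stand-in for the paper's M-LMP
    descriptor, which maps local pixel transitions more smoothly.
    """
    codes = local_binary_pattern(diff, P, R, method="nri_uniform")
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
    return hist

def train_classifiers(features, labels):
    """Fit the two classifiers named in the abstract on an (n_samples, 59) feature matrix."""
    svm = SVC(kernel="linear").fit(features, labels)
    knn = KNeighborsClassifier(n_neighbors=3).fit(features, labels)
    return svm, knn
```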

    Preface (ICIRA 2020)
